    Alias-Free Convnets: Fractional Shift Invariance via Polynomial Activations

    Although CNNs are believed to be invariant to translations, recent works have shown this is not the case, due to aliasing effects that stem from downsampling layers. Existing architectural solutions to prevent aliasing are only partial, since they do not address the aliasing that originates in non-linearities. We propose an extended anti-aliasing method that tackles both downsampling and non-linear layers, thus creating truly alias-free, shift-invariant CNNs. We show that the presented model is invariant to integer as well as fractional (i.e., sub-pixel) translations, thus outperforming other shift-invariant methods in terms of robustness to adversarial translations.

    Comment: The paper was accepted to CVPR 2023. Our code is available at https://github.com/hmichaeli/alias_free_convnets
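
    The abstract points at two aliasing sources: strided downsampling and non-linear activations. Below is a minimal PyTorch sketch of those two ingredients only, written from the abstract alone; the module names, the degree-2 polynomial, and the 3x3 blur kernel are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PolyAct(nn.Module):
        # Illustrative polynomial activation (assumed form a*x^2 + b*x).
        # Unlike ReLU, a fixed-degree polynomial expands a signal's bandwidth
        # by a bounded factor, so the aliasing it introduces can be controlled.
        def __init__(self, a=0.1, b=1.0):
            super().__init__()
            self.a = nn.Parameter(torch.tensor(a))
            self.b = nn.Parameter(torch.tensor(b))

        def forward(self, x):
            return self.a * x ** 2 + self.b * x

    class BlurPool2d(nn.Module):
        # Anti-aliased downsampling: depthwise low-pass filter, then subsample.
        def __init__(self, channels, stride=2):
            super().__init__()
            k = torch.tensor([1.0, 2.0, 1.0])
            k = torch.outer(k, k)
            k = k / k.sum()
            self.register_buffer("kernel", k.expand(channels, 1, 3, 3).contiguous())
            self.stride = stride
            self.channels = channels

        def forward(self, x):
            # Circular padding keeps the operation shift-equivariant on the torus.
            x = F.pad(x, (1, 1, 1, 1), mode="circular")
            return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)

    # Usage sketch: a conv block combining both pieces.
    x = torch.randn(1, 16, 32, 32)
    block = nn.Sequential(
        nn.Conv2d(16, 16, kernel_size=3, padding=1, padding_mode="circular"),
        PolyAct(),
        BlurPool2d(16),
    )
    y = block(x)  # shape (1, 16, 16, 16)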